
    Kinematics and dynamics are not represented independently in motor working memory: Evidence from an interference study

    Our capacity to learn multiple dynamic and visuomotor tasks is limited by the time between the presentations of the tasks. When subjects are required to adapt to equal and opposite position-dependent visuomotor rotations (Krakauer et al., 1999) or velocity-dependent force fields (Brashers-Krug et al., 1996) in quick succession, interference occurs that prevents the first task from being consolidated in memory. In contrast, such interference is not observed between learning a position-dependent visuomotor rotation and an acceleration-dependent force field. On the basis of this finding, it has been argued that internal models of kinematic and dynamic sensorimotor transformations are learned independently (Krakauer et al., 1999). However, these findings are also consistent with the perturbations interfering only if they depend on the same kinematic variable. We evaluated this hypothesis using kinematic and dynamic transformations matched in terms of the kinematic variable on which they depend. Subjects adapted to a position-dependent visuomotor rotation followed 5 min later by a position-dependent rotary force field either with or without visual feedback of arm position. The force field tended to rotate the hand in the direction opposite to the visuomotor rotation. To assess learning, all subjects were retested 24 hr later on the visuomotor rotation, and their performance was compared with a control group exposed only to the visuomotor rotation on both days. Adapting to the position-dependent force field, both with and without visual feedback, impaired learning of the visuomotor rotation. Thus, interference between our kinematic and dynamic transformations was observed, suggesting that the key determinant of interference is the kinematic variable on which the transformation depends.
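    A minimal sketch may help make the "same kinematic variable" idea concrete: both the visuomotor rotation and the rotary force field described above can be written purely as functions of hand position. The rotation angle and force gain below are illustrative placeholders, not the values used in the study.

```python
import numpy as np

# Illustrative only: both perturbations depend on hand position alone.
def visuomotor_rotation(hand_pos, origin, angle_deg=30.0):
    """Cursor shown at the hand position rotated about the start point."""
    theta = np.deg2rad(angle_deg)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return origin + R @ (hand_pos - origin)

def rotary_force_field(hand_pos, origin, gain=20.0):
    """Force proportional to the displacement from the start point and
    perpendicular to it, tending to rotate the hand about the origin
    (here, opposite in sense to the visual rotation above)."""
    d = hand_pos - origin
    return gain * np.array([d[1], -d[0]])   # displacement rotated 90 deg clockwise
```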

    The effect of contextual cues on the encoding of motor memories.

    Several studies have shown that sensory contextual cues can reduce the interference observed during learning of opposing force fields. However, because each study examined a small set of cues, often in a unique paradigm, the relative efficacy of different sensory contextual cues is unclear. In the present study we quantify how seven contextual cues, some investigated previously and some novel, affect the formation and recall of motor memories. Subjects made movements in a velocity-dependent curl field, with direction varying randomly from trial to trial but always associated with a unique contextual cue. Linking field direction to the cursor or background color, or to peripheral visual motion cues, did not reduce interference. In contrast, the orientation of a visual object attached to the hand cursor significantly reduced interference, albeit by a small amount. When the fields were associated with movement in different locations in the workspace, a substantial reduction in interference was observed. We tested whether this reduction in interference was due to the different locations of the visual feedback (targets and cursor) or the movements (proprioceptive). When the fields were associated only with changes in visual display location (movements always made centrally) or only with changes in the movement location (visual feedback always displayed centrally), a substantial reduction in interference was observed. These results show that although some visual cues can lead to the formation and recall of distinct representations in motor memory, changes in spatial visual and proprioceptive states of the movement are far more effective than changes in simple visual contextual cues.
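    For concreteness, here is a sketch of the trial structure assumed in this kind of paradigm: a velocity-dependent curl field whose direction is selected by the contextual cue presented on each trial. The gain and the cue encoding are placeholders, not the experimental values.

```python
import numpy as np

GAIN = 13.0                                   # illustrative gain, N*s/m
CURL = np.array([[0.0, 1.0], [-1.0, 0.0]])    # 90-degree rotation of velocity

def curl_force(hand_vel, cue):
    """Force perpendicular to hand velocity; the contextual cue (e.g. a color,
    object orientation, or workspace location) determines CW vs CCW."""
    sign = 1.0 if cue == "cw" else -1.0
    return sign * GAIN * CURL @ hand_vel
```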

    Motor Planning, Not Execution, Separates Motor Memories

    Recent theories of limb control emphasize motor cortex as a dynamical system, with planning setting the initial neural state and execution arising from the self-limiting evolution of the intrinsic neural dynamics. Therefore, movements that share an initial trajectory but then diverge might have different neural states during the execution of the identical initial trajectories. We hypothesized that motor adaptation maps neural states to changes in motor command. This predicts that two opposing perturbations, which interfere when experienced over the same movement, could be learned if each is associated with a different plan, even if that plan is not executed. We show that planning, but not executing, different follow-through movements allows opposing perturbations to be learned simultaneously over the same movement. However, no learning occurs if different follow-throughs are executed but not planned prior to movement initiation. Our results suggest that neural, rather than physical, states are the critical factor associated with motor adaptation. We thank the Wellcome Trust, the Royal Society (Noreen Murray Professorship in Neurobiology to D.M.W.), the Cambridge Commonwealth, European and International Trusts and the Rutherford Foundation Trust.

    Composition and decomposition in bimanual dynamic learning.

    Our ability to skillfully manipulate an object often involves the motor system learning to compensate for the dynamics of the object. When the two arms learn to manipulate a single object they can act cooperatively, whereas when they manipulate separate objects they control each object independently. We examined how learning transfers between these two bimanual contexts by applying force fields to the arms. In a coupled context, a single dynamic is shared between the arms, and in an uncoupled context separate dynamics are experienced independently by each arm. In a composition experiment, we found that when subjects had learned uncoupled force fields they were able to transfer to a coupled field that was the sum of the two fields. However, the contribution of each arm repartitioned over time so that, when they returned to the uncoupled fields, the error initially increased but rapidly reverted to the previous level. In a decomposition experiment, after subjects learned a coupled field, their error increased when exposed to uncoupled fields that were orthogonal components of the coupled field. However, when the coupled field was reintroduced, subjects rapidly readapted. These results suggest that the representations of dynamics for uncoupled and coupled contexts are partially independent. We found additional support for this hypothesis by showing significant learning of opposing curl fields when the context, coupled versus uncoupled, was alternated with the curl field direction. These results suggest that the motor system is able to use partially separate representations for dynamics of the two arms acting on a single object and two arms acting on separate objects.
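    A sketch of the composition/decomposition logic, under the assumption (consistent with the description above) that the two uncoupled fields are orthogonal velocity-dependent components whose sum gives the coupled field acting on the jointly held object. The matrices and gain are illustrative, not the experimental values.

```python
import numpy as np

G = 13.0
B_LEFT  = G * np.array([[0.0, 1.0], [0.0,  0.0]])   # x-force from y-velocity
B_RIGHT = G * np.array([[0.0, 0.0], [-1.0, 0.0]])   # y-force from x-velocity

def uncoupled_forces(v_left, v_right):
    """Uncoupled context: each arm manipulates its own object and
    experiences only its own field."""
    return B_LEFT @ v_left, B_RIGHT @ v_right

def coupled_force(v_object):
    """Coupled context: the single shared object experiences the sum of
    the two component fields (here, a full curl field)."""
    return (B_LEFT + B_RIGHT) @ v_object
```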

    Context-dependent partitioning of motor learning in bimanual movements.

    Human subjects easily adapt to single dynamic or visuomotor perturbations. In contrast, when two opposing dynamic or visuomotor perturbations are presented sequentially, interference is often observed. We examined the effect of bimanual movement context on interference between opposing perturbations using pairs of contexts, in which the relative direction of movement between the two arms was different across the pair. When each perturbation direction was associated with a different bimanual context, such as movement of the arms in the same direction versus movement in the opposite direction, interference was dramatically reduced. This occurred over a short period of training and was seen for both dynamic and visuomotor perturbations, suggesting a partitioning of motor learning for the different bimanual contexts. Further support for this was found in a series of transfer experiments. Having learned a single dynamic or visuomotor perturbation in one bimanual context, subjects showed incomplete transfer of this learning when the context changed, even though the perturbation remained the same. In addition, we examined a bimanual context in which one arm was moved passively and showed that the reduction in interference requires active movement. The sensory consequences of movement are thus insufficient to allow opposing perturbations to be co-represented. Our results suggest that different bimanual movement contexts engage at least partially separate representations of dynamics and kinematics in the motor system.

    Statistics of Natural Movements Are Reflected in Motor Errors

    Humans use their arms to engage in a wide variety of motor tasks during everyday life. However, little is known about the statistics of these natural arm movements. Studies of the sensory system have shown that the statistics of sensory inputs are key to determining sensory processing. We hypothesized that the statistics of natural everyday movements may, in a similar way, influence motor performance as measured in laboratory-based tasks. We developed a portable motion-tracking system that could be worn by subjects as they went about their daily routine outside of a laboratory setting. We found that the well-documented symmetry bias is reflected in the relative incidence of movements made during everyday tasks. Specifically, symmetric and antisymmetric movements are predominant at low frequencies, whereas only symmetric movements are predominant at high frequencies. Moreover, the statistics of natural movements, that is, their relative incidence, correlated with subjects' performance on a laboratory-based phase-tracking task. These results provide a link between natural movement statistics and motor performance and confirm that the symmetry bias documented in laboratory studies is a natural feature of human movement.
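    One way to picture the frequency-resolved symmetry analysis described above is a cross-spectral estimate of the relative phase between the two arms, with near-zero phase counted as symmetric and near-180° phase as antisymmetric. The signal names, window length, and thresholds below are assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import csd

def symmetry_profile(left_arm, right_arm, fs, nperseg=1024):
    """Classify each frequency bin as symmetric (in-phase) or antisymmetric
    (anti-phase) from the cross-spectral phase between the two arm signals."""
    f, Pxy = csd(left_arm, right_arm, fs=fs, nperseg=nperseg)
    phase = np.abs(np.angle(Pxy))            # 0..pi, per frequency bin
    symmetric = phase < np.pi / 4            # within 45 deg of in-phase
    antisymmetric = phase > 3 * np.pi / 4    # within 45 deg of anti-phase
    return f, symmetric, antisymmetric
```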

    Integration of visual and joint information to enable linear reaching motions

    A new dynamics-driven control law was developed for a robot arm, based on a feedback control law that uses a linear transformation directly from work space to joint space. This was validated using a simulation of a two-joint planar robot arm, and an optimisation algorithm was used to find the optimum matrix that generates straight trajectories of the end-effector in the work space. We found that this linear matrix can be decomposed into a rotation matrix representing the orientation of the goal direction and a joint relation matrix (MJRM) representing the joint response to errors in the Cartesian work space. The decomposition of the linear matrix indicates the separation of path planning into the direction of the reaching motion and the synergies of joint coordination. Once the MJRM is numerically obtained, feedforward planning of the reaching direction allows us to provide asymptotically stable, linear trajectories in the entire work space through a rotational transformation, completely avoiding the use of inverse kinematics. Our dynamics-driven control law suggests an interesting framework for interpreting human reaching motion control, an alternative to the dominant inverse-method-based explanations, avoiding expensive computation of the inverse kinematics and point-to-point control along desired trajectories.
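    A rough sketch of the control-law structure described above: the joint command is a fixed linear map of the Cartesian error, factored into a rotation aligned with the goal direction and a joint relation matrix (MJRM). The factor ordering, the damping term, and any MJRM values are guesses for illustration; the paper obtains the matrix by optimisation.

```python
import numpy as np

def joint_command(x_goal, x_hand, qdot, mjrm, kd=0.5):
    """Joint-space command computed directly from the work-space error,
    with no inverse kinematics: u = MJRM @ R(goal)^T @ (x_goal - x_hand)."""
    e = x_goal - x_hand                       # Cartesian (work-space) error
    phi = np.arctan2(e[1], e[0])              # orientation of the goal direction
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
    u = mjrm @ R.T @ e                        # linear work-space -> joint-space map
    return u - kd * qdot                      # simple joint damping (assumed)
```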

    The visual geometry of a tool modulates generalization during adaptation.

    Knowledge about a tool's dynamics can be acquired from the visual configuration of the tool and through physical interaction. Here, we examine how visual information affects the generalization of dynamic learning during tool use. Subjects rotated a virtual hammer-like object while we varied the object dynamics separately for two rotational directions. This allowed us to quantify the coupling of adaptation between the directions, that is, how adaptation transferred from one direction to the other. Two groups experienced the same dynamics of the object. For one group, the object's visual configuration was displayed, while for the other, the visual display was uninformative as to the dynamics. We fit a range of context-dependent state-space models to the data, comparing different forms of coupling. We found that when the object's visual configuration was explicitly provided, there was substantial coupling, such that 31% of learning in one direction transferred to the other. In contrast, when the visual configuration was ambiguous, despite experiencing the same dynamics, the coupling was reduced to 12%. Our results suggest that generalization of dynamic learning of a tool relies not only on its dynamic behaviour but also on the visual configuration with which the dynamics is associated.
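    A minimal example of the kind of context-dependent state-space model compared above: one adaptation state per rotational direction, trial-by-trial retention and error-driven learning, with a coupling fraction c governing how much of each update transfers to the other direction (roughly 0.31 for the explicit-geometry group and 0.12 for the ambiguous group, per the numbers reported). The retention and learning-rate values are placeholders, and this specific update form is an assumption rather than the fitted model.

```python
import numpy as np

def simulate(perturbations, directions, a=0.99, b=0.2, c=0.31):
    """Two coupled adaptation states, one per rotational direction.
    perturbations[t]: perturbation magnitude on trial t
    directions[t]:    0 or 1, the direction rotated on trial t"""
    x = np.zeros(2)
    predictions = []
    for p, d in zip(perturbations, directions):
        predictions.append(x[d])       # model output before the trial's error
        error = p - x[d]
        x *= a                         # retention of both states
        x[d] += b * error              # learning in the experienced direction
        x[1 - d] += c * b * error      # coupled transfer to the other direction
    return np.array(predictions)
```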

    Gone in 0.6 seconds: the encoding of motor memories depends on recent sensorimotor states.

    Real-world tasks often require movements that depend on a previous action or on changes in the state of the world. Here we investigate whether motor memories encode the current action in a manner that depends on previous sensorimotor states. Human subjects performed trials in which they made movements in a randomly selected clockwise or counterclockwise velocity-dependent curl force field. Movements during this adaptation phase were preceded by a contextual phase that determined which of the two fields would be experienced on any given trial. As expected from previous research, when static visual cues were presented in the contextual phase, strong interference (resulting in an inability to learn either field) was observed. In contrast, when the contextual phase involved subjects making a movement that was continuous with the adaptation-phase movement, a substantial reduction in interference was seen. As the time between the contextual and adaptation movements increased, so did the interference, reaching a level similar to that seen for static visual cues for delays >600 ms. This contextual effect generalized to purely visual motion, active movement without vision, passive movement, and isometric force generation. Our results show that sensorimotor states that differ in their recent temporal history can engage distinct representations in motor memory, but this effect decays progressively over time and is abolished by ∼600 ms. This suggests that motor memories are encoded not simply as a mapping from current state to motor command but are encoded in terms of the recent history of sensorimotor states.